We present a robust algorithm for complex human activity recognition for natural human-robot interaction. The algorithm is based on tracking the positions of selected joints in the human skeleton. For any given activity, only a few skeleton joints are involved in performing the activity, so a subset of joints contributing the most towards the activity is selected. Our approach of tracking a subset of skeleton joints (instead of tracking the whole skeleton) is computationally efficient and provides better recognition accuracy. We have developed both manual and automatic approaches for the selection of these joints. The positions of the selected joints are tracked for the duration of the activity and are used to construct a feature vector for each activity. Once the feature vectors have been constructed, we use a multiclass Support Vector Machine (SVM) classifier to train and test the algorithm. The algorithm has been tested on a purpose-built dataset of depth videos recorded using a Kinect camera. The dataset consists of 250 videos of 10 different activities performed by different users. Experimental results show a classification accuracy of 83% when tracking all skeleton joints, 95% when using manual selection of a subset of joints, and 89% when using automatic selection of a subset of joints.
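The pipeline summarized above (tracking a subset of skeleton joints, constructing a fixed-length feature vector per video, and classifying with a multiclass SVM) can be illustrated with a minimal sketch. This is not the authors' implementation: the joint indices, the `feature_vector` helper, the `X_raw`/`y` data containers, and the use of scikit-learn are assumptions made only for illustration.

```python
# Hypothetical sketch: classify activities from tracked skeleton-joint
# trajectories with a multiclass SVM. Assumes scikit-learn and that each
# video yields a NumPy array of per-frame 3-D joint positions,
# shape (frames, joints, 3).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumed indices of the joints that contribute most to the activity.
SELECTED_JOINTS = [4, 5, 6, 8, 9, 10]

def feature_vector(skeleton_seq, n_samples=20):
    """Build a fixed-length feature vector from the selected joints.

    skeleton_seq: array (frames, joints, 3) of per-frame joint positions.
    The sequence is resampled to n_samples frames so that videos of
    different lengths map to vectors of equal dimension.
    """
    idx = np.linspace(0, len(skeleton_seq) - 1, n_samples).astype(int)
    subset = skeleton_seq[idx][:, SELECTED_JOINTS, :]  # (n_samples, |subset|, 3)
    return subset.reshape(-1)                          # flatten to a 1-D feature vector

# X_raw: list of per-video joint-position sequences; y: activity labels.
# Both are assumed to be available from the recorded dataset.
X = np.stack([feature_vector(seq) for seq in X_raw])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", decision_function_shape="ovr")  # one-vs-rest multiclass SVM
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```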